
    On local search and LP and SDP relaxations for k-Set Packing

    Set packing is a fundamental problem that generalises several well-known combinatorial optimization problems and has many applications. It is equivalent to hypergraph matching and is strongly related to the maximum independent set problem. In this thesis we study the k-set packing problem: given a universe U and a collection C of subsets of U, each of cardinality k, find a maximum collection of mutually disjoint subsets. Local search techniques have proved successful in the search for approximation algorithms, both for the unweighted version and for the weighted version of the problem, in which every subset in C carries a weight and the objective is to maximise the sum of the weights of the chosen subsets. We survey these approaches and give some background and intuition behind them. In particular, we simplify the algebraic proof of the main lemma of the currently best weighted approximation algorithm of Berman ([Ber00]) into a proof that reveals more of the intuition behind the math. The main result is a new bound of k/3 + 1 + epsilon on the integrality gap of a polynomially sized LP relaxation for k-set packing by Chan and Lau ([CL10]) and of the natural SDP relaxation [NOTE: see page iii]. We provide detailed proofs of the lemmas needed to establish this bound and treat some background on related topics such as semidefinite programming and the Lovász theta function. Finally, in an extended discussion we suggest possibilities for future research: how the current results on weighted approximation algorithms and on the LP and SDP relaxations might be improved, the strong relation between set packing and the independent set problem, and the difference between the weighted and the unweighted versions of the problem.
    Comment: There is a mistake in the following line of Theorem 17: "As an induced subgraph of H with more edges than vertices constitutes an improving set". Therefore, the proofs of Theorem 17, and hence of Theorems 19, 23 and 24, are false. It remains open whether these theorems are true.
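    The local-search idea surveyed in the thesis can be illustrated with a minimal sketch (not Berman's algorithm, and far from the best-known ratios): start from a greedy disjoint packing and repeatedly replace one chosen set with two disjoint candidate sets. All names below are illustrative.

    ```python
    from itertools import combinations

    def local_search_set_packing(sets):
        """Greedy disjoint packing improved by (1,2)-swaps: repeatedly
        replace one chosen set with two disjoint candidate sets."""
        sets = [frozenset(s) for s in sets]
        packing, used = [], set()
        for s in sets:                      # greedy initial solution
            if used.isdisjoint(s):
                packing.append(s)
                used |= s
        improved = True
        while improved:
            improved = False
            for out in list(packing):
                rest = used - out
                # candidates disjoint from the packing minus `out`
                cands = [s for s in sets
                         if s not in packing and rest.isdisjoint(s)]
                for a, b in combinations(cands, 2):
                    if a.isdisjoint(b):     # swap `out` for {a, b}
                        packing.remove(out)
                        packing += [a, b]
                        used = set().union(*packing)
                        improved = True
                        break
                if improved:
                    break
        return packing
    ```

    On the instance {1,2,3}, {1,4,5}, {2,6,7}, {3,8,9} the greedy phase picks only {1,2,3}, and a single swap then yields a packing of size two, which shows why such swaps already beat pure greedy.
    
    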

    High Multiplicity Scheduling with Switching Costs for few Products

    We study a variant of the single-machine capacitated lot-sizing problem with sequence-dependent setup costs and product-dependent inventory costs. We are given a single machine and a set of products, each associated with a constant demand rate, a maximum loading rate and holding costs per time unit. Switching production from one product to another incurs sequencing costs that depend on the two products involved. We show that once the high multiplicity setting and switching costs are considered, even trivial cases of the corresponding "normal" counterparts become non-trivial in terms of output size and complexity. We present solutions for one and two products.
    Comment: 10 pages (4 appendix), to be published in Operations Research Proceedings 201

    Cyclic Lot-Sizing Problems with Sequencing Costs

    We study a single machine lot-sizing problem in which n types of products need to be scheduled on the machine. Each product is associated with a constant demand rate, a maximum production rate and inventory costs per time unit. Every time the machine switches production between products, sequencing costs are incurred; these costs depend both on the product the machine just produced and on the product it is about to produce. The goal is to find a cyclic schedule minimizing total average costs, subject to the condition that all demands are satisfied. We establish the complexity of the problem and prove a number of structural properties that largely characterize optimal solutions. Moreover, we present two algorithms that approximate the optimal schedules by augmenting the problem input. Due to the high multiplicity setting, even trivial cases of the corresponding conventional counterparts become highly non-trivial with respect to output size and computational complexity, even without sequencing costs. In particular, the length of an optimal solution can be exponential in the input size of the problem. Nevertheless, our approximation algorithms produce schedules of polynomial length and of good quality compared to the optimal schedules of exponential length.
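    The cost structure involved (setup versus holding costs per cycle) is easiest to see in the classic single-product special case, where the finite-production-rate EOQ formula gives a closed-form optimal cycle length. This is textbook background, not the paper's algorithm; the parameter names are illustrative.

    ```python
    import math

    def single_product_cycle(setup, hold, demand, rate):
        """Classic single-product lot-sizing sketch (EOQ with a finite
        production rate): pick the cycle length T minimising average cost
        setup/T + hold * demand * (1 - demand/rate) * T / 2,
        i.e. setup cost per cycle plus average holding cost."""
        assert demand < rate, "demand must be satisfiable"
        factor = hold * demand * (1 - demand / rate)
        T = math.sqrt(2 * setup / factor)       # balances the two terms
        avg_cost = setup / T + factor * T / 2
        return T, avg_cost
    ```

    At the optimum the two cost terms are equal, so the average cost simplifies to sqrt(2 * setup * hold * demand * (1 - demand/rate)); with sequencing costs between multiple products, no such closed form is available, which is where the paper's structural results come in.
    
    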

    Exact and Approximation Algorithms for Routing a Convoy Through a Graph

    We study routing problems of a convoy in a graph, generalizing the shortest path problem (SPP), the travelling salesperson problem (TSP), and the Chinese postman problem (CPP), which are all well-studied in the classical (non-convoy) setting. We assume that each edge in the graph has a length and a speed at which it can be traversed, and that our convoy has a given length. While the convoy moves through the graph, parts of it can be located on different edges. For safety requirements, the whole convoy must at all times travel at a single speed, dictated by the slowest edge on which part of the convoy is currently located. For Convoy-SPP, we give a strongly polynomial time exact algorithm. For Convoy-TSP, we provide an O(log n)-approximation algorithm and an O(1)-approximation algorithm for trees. Both results carry over to Convoy-CPP, which, perhaps surprisingly, we prove to be NP-hard in the convoy setting. This contrasts with the non-convoy setting, in which the problem is polynomial time solvable.
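    The speed rule above (the convoy moves at the speed of the slowest occupied edge) can be evaluated exactly for a fixed path: the speed is piecewise constant and changes only when the head or the tail of the convoy crosses an edge boundary. The routine below is an illustrative evaluation of that rule, not any of the paper's algorithms.

    ```python
    def convoy_path_time(edges, L):
        """Travel time for a convoy of length L along a fixed path.

        edges: list of (length, speed) pairs.  The head starts at
        position 0; the trip ends when the tail clears the path, i.e.
        the head reaches total_length + L.  At any moment the speed is
        the minimum speed among the edges the convoy occupies."""
        cum = [0.0]
        for length, _ in edges:
            cum.append(cum[-1] + length)
        # occupied edges change only when head or tail crosses a boundary
        points = sorted({p for c in cum for p in (c, c + L)})
        time = 0.0
        for a, b in zip(points, points[1:]):
            head = (a + b) / 2          # representative head position
            # speeds of edges overlapping the interval (head - L, head)
            occupied = [speed
                        for (lo, hi), (_, speed) in zip(zip(cum, cum[1:]), edges)
                        if lo < head and hi > head - L]
            time += (b - a) / min(occupied)
        return time
    ```

    For a single edge of length 10 and speed 2 with convoy length 5, the head travels 15 units at speed 2, giving 7.5 time units; with a slow second edge, the convoy is held at the slow speed for as long as any part of it touches that edge.
    
    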

    Posted Price Mechanisms and Optimal Threshold Strategies for Random Arrivals

    The classic prophet inequality states that, when faced with a finite sequence of non-negative independent random variables, a gambler who knows their distributions and is allowed to stop the sequence at any time can obtain, in expectation, at least half as much reward as a prophet who knows the values of each random variable and can choose the largest one. In this work we consider the situation in which the sequence comes in random order. We look at both a non-adaptive and an adaptive version of the problem. In the former case the gambler sets a threshold for every random variable a priori, while in the latter case the thresholds are set when a random variable arrives. For the non-adaptive case, we obtain an algorithm achieving an expected reward of at least a 1-1/e fraction of the expected maximum, and prove that this constant is optimal. For the adaptive case with i.i.d. random variables, we obtain a tight 0.745-approximation, solving a problem posed by Hill and Kertz in 1982. We also apply these prophet inequalities to posted price mechanisms, and prove the same tight bounds for both a non-adaptive and an adaptive posted price mechanism when buyers arrive in random order.
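    The factor-1/2 guarantee mentioned above is achieved by a single fixed threshold; one classic choice sets the threshold at half the expected maximum. The Monte-Carlo sketch below illustrates that rule (not the paper's 1-1/e or 0.745 algorithms); all names are illustrative.

    ```python
    import random

    def prophet_vs_gambler(dists, trials=20000, seed=0):
        """Monte-Carlo comparison of a single-threshold gambler
        (threshold T = E[max] / 2, a classic 1/2-competitive rule)
        against the prophet, who always takes the maximum.

        dists: list of samplers, one per random variable; each takes an
        rng and returns a draw."""
        rng = random.Random(seed)
        # estimate E[max] first to set the threshold
        est = sum(max(d(rng) for d in dists) for _ in range(trials)) / trials
        T = est / 2
        gambler = prophet = 0.0
        for _ in range(trials):
            xs = [d(rng) for d in dists]
            prophet += max(xs)
            gambler += next((x for x in xs if x >= T), 0.0)  # stop at first x >= T
        return gambler / trials, prophet / trials
    ```

    On five i.i.d. uniform [0, 1] variables the gambler's average reward comfortably exceeds half of the prophet's, consistent with the guarantee; the point of the paper is that random arrival order allows much stronger constants.
    
    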

    The Secretary Problem with Independent Sampling

    In the secretary problem we are faced with an online sequence of elements with values. Upon seeing an element we have to make an irrevocable take-it-or-leave-it decision. The goal is to maximize the probability of picking the element of maximum value. In the most classic version of the problem the elements arrive in random order and their values are arbitrary. However, varying the available information gives rise to new, interesting problems, and the case in which the arrival order is adversarial instead of random leads to further variants that have been considered in the literature. In this paper we study both the random order and the adversarial order secretary problem with an additional twist. The values are arbitrary, but before the online sequence starts we independently sample each element with a fixed probability p. The sampled elements become our information or history set, and the game is played over the remaining elements. We call these problems the random order secretary problem with p-sampling (ROSp for short) and the adversarial order secretary problem with p-sampling (AOSp for short). Our main result is best possible algorithms for both problems and all values of p. As p grows to 1, the obtained guarantees converge to the optimal guarantees of the full information case. In the adversarial order setting, the best possible algorithm turns out to be a simple fixed threshold algorithm in which the optimal threshold is a function of p only. In the random order setting we prove that the best possible algorithm is characterized by a fixed sequence of time thresholds, dictating at which point in time we should start accepting a value that is both a maximum of the online sequence and has a given ranking within the sampled elements.
    Comment: 41 pages, 2 figures, shorter version published in proceedings of SODA2
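    The p-sampling model can be illustrated with a deliberately simple rule (not the paper's optimal algorithms): use the history set only through its maximum, and accept the first online element that beats it. All names and the experiment setup are illustrative.

    ```python
    import random

    def sample_then_pick(values, p, rng):
        """Sample each element independently with probability p, then
        accept the first remaining (online) element that beats the best
        sampled value.  Returns the accepted value, or None."""
        sample_max = float("-inf")
        online = []
        for v in values:
            if rng.random() < p:
                sample_max = max(sample_max, v)
            else:
                online.append(v)
        for v in online:
            if v > sample_max:
                return v        # irrevocable take-it decision
        return None

    def success_rate(n=50, p=0.5, trials=20000, seed=1):
        """Estimate how often the rule picks the overall maximum
        when elements arrive in uniformly random order."""
        rng = random.Random(seed)
        wins = 0
        for _ in range(trials):
            vals = [rng.random() for _ in range(n)]
            best = max(vals)
            rng.shuffle(vals)
            if sample_then_pick(vals, p, rng) == best:
                wins += 1
        return wins / trials
    ```

    For p = 1/2 this simple rule already wins roughly a third of the time (it loses outright whenever the maximum lands in the sample); the paper's time-threshold algorithms for ROSp improve on this and are provably best possible.
    
    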

    Logical Imputation to Optimize Prognostic Risk Classification in Metastatic Renal Cell Cancer

    BACKGROUND: Application of the MSKCC and IMDC models is recommended for prognostication in metastatic renal cell cancer (mRCC). Patient classification into MSKCC and IMDC risk groups in real-world observational studies is often hampered by missing data on required pre-treatment characteristics. OBJECTIVES: To evaluate the effect of applying easy-to-use logical, or deductive, imputation on MSKCC and IMDC risk classification in an observational study setting. PATIENTS AND METHODS: We used data on 713 mRCC patients with first-line sunitinib treatment from our observational European multi-centre study EuroTARGET. Pre-treatment characteristics and follow-up were derived from medical files. Hospital-specific cut-off values for laboratory measurements were requested. We evaluated the effect of logical imputation of missing data, and of consensus versus hospital-specific cut-off values, on patient classification and on the models' predictive performance for progression-free and overall survival (OS). RESULTS: 45% of the patients had missing data for >= 1 pre-treatment characteristic for either model. Still, 72% of all patients could be unambiguously classified using logical imputation. Use of consensus instead of hospital-specific cut-offs led to a shift in risk group for 12% and 7% of patients for the MSKCC and IMDC model, respectively. Using logical imputation or other cut-offs did not influence the models' predictive performance, which was in line with previous reports (c-statistic of approximately 0.64 for OS). CONCLUSIONS: Logical imputation leads to a substantial increase in the proportion of patients that can be correctly classified into poor and intermediate MSKCC and IMDC risk groups in observational studies, and its use in the field should be advocated.
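    For risk-factor-counting models such as MSKCC and IMDC, logical (deductive) imputation can be sketched as follows: a patient with missing factors is classified whenever every possible completion of the missing values lands in the same risk group. The 0 / 1-2 / >= 3 grouping below is the usual MSKCC convention; the study's actual imputation rules are richer, and all names are illustrative.

    ```python
    def classify_with_missing(factors):
        """Deductive risk classification under missing data.

        factors: dict mapping risk-factor name to True (present),
        False (absent) or None (missing).  A patient is classified only
        when the risk group is identical for every completion of the
        missing values; otherwise the patient is unclassifiable."""
        present = sum(1 for v in factors.values() if v is True)
        missing = sum(1 for v in factors.values() if v is None)

        def group(n):
            return ("favourable" if n == 0
                    else "intermediate" if n <= 2
                    else "poor")

        # best case: all missing factors absent; worst case: all present
        lo, hi = group(present), group(present + missing)
        return lo if lo == hi else "unclassifiable"
    ```

    For example, a patient with three confirmed factors and one missing is deducible as "poor" regardless of the missing value, which is exactly how the study recovers unambiguous classifications for patients with incomplete records.
    
    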

    Common variants at 12p11, 12q24, 9p21, 9q31.2 and in ZNF365 are associated with breast cancer risk for BRCA1 and/or BRCA2 mutation carriers

    Introduction: Several common alleles have been shown to be associated with breast and/or ovarian cancer risk for BRCA1 and BRCA2 mutation carriers. Recent genome-wide association studies of breast cancer have identified eight additional breast cancer susceptibility loci: rs1011970 (9p21, CDKN2A/B), rs10995190 (ZNF365), rs704010 (ZMIZ1), rs2380205 (10p15), rs614367 (11q13), rs1292011 (12q24), rs10771399 (12p11 near PTHLH) and rs865686 (9q31.2). Methods: To evaluate whether these single nucleotide polymorphisms (SNPs) are associated with breast cancer risk for BRCA1 and BRCA2 carriers, we genotyped these SNPs in 12,599 BRCA1 and 7,132 BRCA2 mutation carriers and analysed the associations with breast cancer risk within a retrospective likelihood framework. Results: Only SNP rs10771399 near PTHLH was associated with breast cancer risk for BRCA1 mutation carriers (per-allele hazard ratio (HR) = 0.87, 95% CI: 0.81 to 0.94, P-trend = 3 × 10^-4). The association was restricted to mutations proven or predicted to lead to absence of protein expression (HR = 0.82, 95% CI: 0.74 to 0.90, P-trend = 3.1 × 10^-5, P-difference = 0.03). Four SNPs were associated with the risk of breast cancer for BRCA2 mutation carriers: rs10995190, P-trend = 0.015; rs1011970, P-trend = 0.048; rs865686, 2df-P = 0.007; rs1292011, 2df-P = 0.03. rs10771399 (PTHLH) was predominantly associated with estrogen receptor (ER)-negative breast cancer for BRCA1 mutation carriers (HR = 0.81, 95% CI: 0.74 to 0.90, P-trend = 4 × 10^-5) and there was marginal evidence of association with ER-negative breast cancer for BRCA2 mutation carriers (HR = 0.78, 95% CI: 0.62 to 1.00, P-trend = 0.049). Conclusions: The present findings, in combination with previously identified modifiers of risk, will ultimately lead to more accurate risk prediction and an improved understanding of the disease etiology in BRCA1 and BRCA2 mutation carriers.

    Approximation Algorithms in Allocation, Scheduling and Pricing

    This dissertation examines four optimisation problems. The first chapter addresses the optimisation of network usage to prevent traffic congestion and internet data issues. The second chapter describes cyclical production planning for machines, such as those used to process huge volumes of lycra. In the third chapter, tasks that involve several components (such as processing time and memory use) are planned on computers so that no single computer is overloaded. The fourth chapter describes strategies for hiring highly skilled employees and strategies to increase the profit margins of a product. A theoretical lower limit is defined for the complexity of each problem and an algorithm is developed to approach this lower limit.